# Information Retrieval Enhancement
## Ruri V3 Reranker 310m Preview
Author: cl-nagoya · License: Apache-2.0 · Tags: Text Embedding, Japanese · Downloads: 79 · Likes: 0
A preview version of a general-purpose Japanese reranking model trained from the cl-nagoya/ruri-v3-pt-310m base model, designed for Japanese text relevance ranking tasks.
## Reranker Msmarco MiniLM L12 H384 Uncased Lambdaloss
Author: tomaarsen · License: Apache-2.0 · Tags: Text Embedding, English · Downloads: 1,019 · Likes: 3
A cross-encoder model fine-tuned from MiniLM-L12-H384-uncased for text reranking and semantic search tasks.
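Cross-encoder rerankers like this are typically driven through the sentence-transformers CrossEncoder class, which scores each (query, document) pair jointly. A minimal sketch; the model id is inferred from the card title and the example texts are illustrative assumptions:

```python
from sentence_transformers import CrossEncoder

# Model id inferred from the card title; treat it as an assumption.
model = CrossEncoder("tomaarsen/reranker-msmarco-MiniLM-L12-H384-uncased-lambdaloss")

query = "how do solar panels generate electricity"
docs = [
    "Photovoltaic cells convert sunlight directly into electricity.",
    "Solar panels are often installed on rooftops.",
    "Wind turbines convert kinetic energy into electrical power.",
]

# Score every (query, document) pair; a higher score means more relevant.
scores = model.predict([(query, d) for d in docs])
for doc, score in sorted(zip(docs, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {doc}")
```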
## Modernbert Large Msmarco Bpr
Author: BlackBeenie · Tags: Text Embedding · Downloads: 21 · Likes: 2
A sentence-transformers model fine-tuned from ModernBERT-large that maps sentences and paragraphs into a 1024-dimensional dense vector space for semantic textual similarity and semantic search.
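Dense bi-encoders of this kind are used by encoding queries and passages separately and ranking by vector similarity. A minimal sketch with sentence-transformers; the model id is inferred from the card and should be treated as an assumption:

```python
from sentence_transformers import SentenceTransformer, util

# Model id inferred from the card title; treat it as an assumption.
model = SentenceTransformer("BlackBeenie/ModernBERT-large-msmarco-bpr")

query_emb = model.encode("what causes rain", convert_to_tensor=True)
passage_embs = model.encode(
    ["Rain forms when water vapor condenses into droplets.",
     "Deserts receive very little precipitation."],
    convert_to_tensor=True,
)

# Cosine similarity between the query and each passage (1 x N tensor);
# the higher-scoring passage ranks first.
print(util.cos_sim(query_emb, passage_embs))
```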
## Mxbai Rerank Base V1
Author: khoj-ai · License: Apache-2.0 · Downloads: 81 · Likes: 1
A Transformer-based reranker model primarily used for information retrieval and search-result optimization.
## Bge Reranker Large Q4 K M GGUF
Author: DrRos · License: MIT · Tags: Text Embedding, Supports Multiple Languages · Downloads: 164 · Likes: 1
A conversion of BAAI/bge-reranker-large to GGUF format for reranking tasks, supporting both Chinese and English.
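GGUF rerankers are typically served with llama.cpp. Recent llama-server builds expose a Jina/Cohere-style rerank endpoint; the flag, endpoint path, and response fields below are assumptions based on llama.cpp at the time of writing, so verify them against your build's `llama-server --help`:

```python
import requests

# Assumes a llama.cpp server started with reranking enabled, e.g.:
#   llama-server -m bge-reranker-large-q4_k_m.gguf --rerank
# Endpoint name and payload shape are assumptions; check your build.
resp = requests.post(
    "http://localhost:8080/v1/rerank",
    json={
        "query": "what is sparse retrieval",
        "documents": [
            "Sparse retrieval matches terms through an inverted index.",
            "Dense retrieval encodes text into vectors.",
        ],
    },
)
for result in resp.json()["results"]:
    print(result["index"], result["relevance_score"])
```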
## Polish Reranker Roberta V2
Author: sdadas · Tags: Text Embedding, Transformers, Other · Downloads: 961 · Likes: 2
An improved Polish reranking model based on sdadas/polish-roberta-large-v2, trained with the RankNet loss and supporting FlashAttention 2 acceleration.
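For reference, the RankNet loss named in the card above is the standard pairwise objective: for a pair where document i should rank above document j, the model's scores s_i and s_j are compared through a logistic function and penalized with cross-entropy. In its simplest deterministic-label form:

```latex
P_{ij} = \sigma(s_i - s_j) = \frac{1}{1 + e^{-(s_i - s_j)}},
\qquad
\mathcal{L}_{\mathrm{RankNet}} = -\log P_{ij}
```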
## T5 Query Reformulation RL
Author: prhegde · License: Apache-2.0 · Tags: Large Language Model, Transformers, Supports Multiple Languages · Downloads: 366 · Likes: 6
A generative model designed for search-query rewriting, using a sequence-to-sequence architecture and a reinforcement learning framework to produce diverse, relevant query rewrites.
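Seq2seq query rewriters like this one are driven through the standard transformers generation API; sampling, rather than greedy decoding, is what yields diverse rewrites. A sketch, with the model id inferred from the card as an assumption:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Model id inferred from the card title; treat it as an assumption.
name = "prhegde/t5-query-reformulation-RL"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
model.eval()

query = "how to train a dog to sit"
inputs = tokenizer(query, return_tensors="pt")

# Sample several candidate rewrites; do_sample=True provides the diversity.
with torch.no_grad():
    outputs = model.generate(
        **inputs, max_new_tokens=32,
        do_sample=True, top_k=50, num_return_sequences=3,
    )
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```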
## Followir 7B
Author: jhu-clsp · License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Downloads: 39 · Likes: 15
FollowIR-7B is an instruction-following retrieval model fine-tuned from Mistral-7B-Instruct-v0.2, focused on reranking in retrieval tasks.
## Splade V3
Author: naver · Tags: Text Embedding, Transformers, English · Downloads: 84.86k · Likes: 40
SPLADE-v3 is the latest generation of the SPLADE model family, developed from SPLADE++SelfDistil and trained with a hybrid approach combining KL-divergence and MarginMSE objectives for information retrieval.
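SPLADE models produce sparse lexical vectors rather than dense ones: the MLM logits are aggregated with a max-pooled log(1 + ReLU(x)) over the sequence, yielding one weight per vocabulary term. A minimal sketch of that computation, following the published SPLADE formulation (the model id is taken from the card and the input text is illustrative):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "naver/splade-v3"  # model id taken from the card
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

text = "sparse neural retrieval with learned term expansion"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# SPLADE aggregation: max over tokens of log(1 + ReLU(logit)),
# masked so that padding positions do not contribute.
weights = torch.log1p(torch.relu(logits))
weights = weights * inputs["attention_mask"].unsqueeze(-1)
sparse_vec = weights.max(dim=1).values.squeeze(0)  # (vocab_size,)

# Show the highest-weighted vocabulary terms (the learned expansion).
top = torch.topk(sparse_vec, 10)
for idx, w in zip(top.indices, top.values):
    print(f"{tokenizer.convert_ids_to_tokens(int(idx)):>15} {w:.2f}")
```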
## Anita
Author: DeepMount00 · License: Apache-2.0 · Tags: Question Answering System, Transformers, Other · Downloads: 134 · Likes: 26
A sentence-transformers model designed for Italian question answering: it analyzes Italian text to predict which context is most likely to contain the answer.
## Msmarco Distilbert Base V4 Feature Extraction Pipeline
Author: questgen · License: Apache-2.0 · Tags: Text Embedding, Transformers · Downloads: 36 · Likes: 0
A DistilBERT-based sentence-transformers model designed for feature extraction and sentence-similarity computation.
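The transformers feature-extraction pipeline returns per-token vectors; a single sentence embedding is then usually obtained by mean pooling, which matches how msmarco-distilbert sentence-transformers models are typically pooled. A sketch; the model id is inferred from the card title as an assumption:

```python
import numpy as np
from transformers import pipeline

# Model id inferred from the card title; treat it as an assumption.
extractor = pipeline(
    "feature-extraction",
    model="questgen/msmarco-distilbert-base-v4-feature-extraction-pipeline",
)

# The pipeline returns token-level vectors: [1 x seq_len x hidden_size].
tokens = np.array(extractor("How do vaccines work?")[0])

# Mean-pool over tokens to get one sentence embedding.
sentence_emb = tokens.mean(axis=0)
print(sentence_emb.shape)  # (768,) for DistilBERT
```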
## Splade Cocondenser Ensembledistil
Author: naver · Tags: Text Embedding, Transformers, English · Downloads: 606.73k · Likes: 42
A SPLADE model for passage retrieval that improves sparse neural information retrieval through knowledge distillation.
## Splade Cocondenser Selfdistil
Author: naver · Tags: Text Embedding, Transformers, English · Downloads: 16.11k · Likes: 10
A SPLADE model for passage retrieval that improves retrieval effectiveness through sparse latent document expansion and knowledge distillation.
## Doc2query T5 Large Msmarco
Author: castorini · Tags: Large Language Model · Downloads: 15 · Likes: 1
A document-expansion model for retrieval: Doc2Query generates queries from documents, and appending those queries to the documents before indexing improves the effectiveness of information retrieval.
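Concretely, doc2query models are plain T5 generators: feed in a passage, sample several queries, and append them to the passage text before indexing so a lexical retriever such as BM25 can match them. A sketch using the transformers generation API, with the model id inferred from the card; top-k sampling is the decoding strategy used in the original doc2query work:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/doc2query-t5-large-msmarco"  # id inferred from the card
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
model.eval()

doc = ("The Manhattan Project was a research and development undertaking "
       "during World War II that produced the first nuclear weapons.")
inputs = tokenizer(doc, return_tensors="pt", truncation=True)

# Sample a few queries per document with top-k sampling.
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32,
                         do_sample=True, top_k=10, num_return_sequences=3)
for seq in out:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```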
## Monot5 Base Med Msmarco
Author: castorini · Tags: Large Language Model · Downloads: 153 · Likes: 1
A document reranking model based on the T5-base architecture, fine-tuned on both MS MARCO and the medical-domain MedMARCO dataset to optimize the relevance ranking of retrieval results.
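monoT5 rerankers score a pair by feeding T5 the prompt `Query: ... Document: ... Relevant:` and comparing the logits of the tokens "true" and "false" at the first decoded position; the softmax probability of "true" is the relevance score. A sketch following that published formulation (model id taken from the card, example texts illustrative):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/monot5-base-med-msmarco"  # id inferred from the card
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
model.eval()

query = "what are the side effects of ibuprofen"
doc = "Common side effects of ibuprofen include nausea and heartburn."

# monoT5 input template from the original paper.
text = f"Query: {query} Document: {doc} Relevant:"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Look up the ids for "true"/"false" instead of hard-coding them.
true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]

with torch.no_grad():
    # Decode a single step by feeding only the decoder start token.
    out = model(
        **inputs,
        decoder_input_ids=torch.full((1, 1), model.config.decoder_start_token_id),
    )
logits = out.logits[0, -1, [false_id, true_id]]
prob_true = torch.softmax(logits, dim=0)[1].item()
print(f"relevance: {prob_true:.3f}")
```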
## Monot5 Large Msmarco
Author: castorini · Tags: Large Language Model · Downloads: 254 · Likes: 3
A reranker based on the T5-large architecture, fine-tuned for 100,000 steps (10 epochs) on the MS MARCO passage dataset, used primarily for document and passage reranking.
## Dense Encoder Msmarco Distilbert Word2vec256k MLM 445k Emb Updated
Author: vocab-transformers · Tags: Text Embedding, Transformers · Downloads: 29 · Likes: 0
A sentence-embedding model trained on the MS MARCO dataset, combining a DistilBERT architecture with a 256k-token vocabulary initialized from word2vec; suitable for semantic search and sentence-similarity tasks.
## Bert Base Mdoc Bm25
Author: Luyu · License: Apache-2.0 · Tags: Text Embedding, English · Downloads: 3,668 · Likes: 1
A text reranking model trained on the MS MARCO document dataset for use with BM25 retrievers, primarily aimed at improving document-retrieval ranking quality.
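Rerankers trained over BM25 candidates are generally used as the second stage of a pipeline: BM25 fetches candidates cheaply, then a cross-encoder rescores each (query, document) pair. A sketch of the rescoring step via sequence classification; the model id is taken from the card, and reading the first logit column as the relevance score is an assumption to verify against the model card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Luyu/bert-base-mdoc-bm25"  # id inferred from the card
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

query = "effects of deforestation on climate"
candidates = [  # pretend these came from a BM25 first stage
    "Deforestation releases stored carbon and alters local rainfall.",
    "The Amazon river is the largest river by discharge volume.",
]

# Encode each (query, document) pair jointly, as a cross-encoder expects.
enc = tokenizer([query] * len(candidates), candidates,
                truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # (num_candidates, num_labels)

# Assumption: the first logit column carries the relevance score;
# check the model card for the exact classification-head layout.
scores = logits[:, 0]
for doc, s in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{s:.3f}  {doc}")
```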